Results 1-20 of 56
1.
Sci Rep ; 14(1): 7154, 2024 03 26.
Article in English | MEDLINE | ID: mdl-38531923

ABSTRACT

Due to the intricate relationships among small non-coding ribonucleic acid (miRNA) sequences, classifying miRNA species, namely Human, Gorilla, Rat, and Mouse, is challenging, and previous methods have lacked robustness and accuracy. In this study, we present AtheroPoint's GeneAI 3.0, a powerful, novel, and generalized method for extracting features from the fixed patterns of purines and pyrimidines in each miRNA sequence in ensemble machine learning (EML) and convolutional neural network (CNN)-based ensemble deep learning (EDL) frameworks. GeneAI 3.0 utilized five conventional features (Entropy, Dissimilarity, Energy, Homogeneity, and Contrast) and three contemporary features (Shannon entropy, Hurst exponent, Fractal dimension) to generate a composite feature set from given miRNA sequences, which was then passed into our ML and DL classification framework. A set of 11 new classifiers was designed, consisting of 5 EML and 6 EDL models for binary/multiclass classification. It was benchmarked against 9 solo ML (SML), 6 solo DL (SDL), and 12 hybrid DL (HDL) models, for a total of 11 + 27 = 38 models. Four hypotheses were formulated and validated using explainable AI (XAI) as well as reliability/statistical tests. The order of mean performance by accuracy (ACC)/area under the curve (AUC) of the 24 DL classifiers was EDL > HDL > SDL. The mean performance of EDL models with CNN layers was superior to that without CNN layers by 0.73%/0.92%. The mean performance of EML models was superior to that of SML models, with ACC/AUC improvements of 6.24%/6.46%. EDL models performed significantly better than EML models, with a mean increase in ACC/AUC of 7.09%/6.96%. The GeneAI 3.0 tool produced the expected XAI feature plots, and the statistical tests showed significant p-values. Ensemble models with composite features are thus highly effective and generalized models for classifying miRNA sequences.
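The Shannon entropy feature named above has a textbook definition; as a minimal sketch (not the authors' GeneAI 3.0 code, whose feature pipeline is not given here), it can be computed for a nucleotide sequence as:

```python
from collections import Counter
from math import log2

def shannon_entropy(seq):
    """Shannon entropy (bits per symbol) of the residue distribution in a sequence."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# A uniform 4-letter sequence attains the maximum of 2 bits for this alphabet.
print(shannon_entropy("ACGU" * 10))  # → 2.0
```

A single-symbol sequence such as "AAAA" scores zero, so the feature separates low-complexity from balanced purine/pyrimidine compositions.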


Subjects
Deep Learning, MicroRNAs, Humans, Animals, Mice, Rats, Nucleotides, Reproducibility of Results, Area Under Curve
2.
Adv Neurobiol ; 36: 429-444, 2024.
Article in English | MEDLINE | ID: mdl-38468046

ABSTRACT

Several natural phenomena can be described by studying their statistical scaling patterns, hence lending themselves to simple geometrical interpretation. In this regard, fractal geometry is a powerful tool for describing the irregular or fragmented shape of natural features, using spatial or time-domain statistical scaling laws (power-law behavior) to characterize real-world physical systems. This chapter presents some works on the usefulness of fractal features, mainly the fractal dimension and the related Hurst exponent, in the characterization and identification of pathologies and radiological features in neuroimaging, mainly magnetic resonance imaging.
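As a rough, dependency-free illustration of the Hurst exponent discussed above (a simplified rescaled-range estimate, not the chapter's own methodology), one can regress log(R/S) on log(window size):

```python
import random
from math import log
from statistics import mean, stdev

def rescaled_range(x):
    """R/S of a window: range of cumulative mean-deviations over the std dev."""
    mu = mean(x)
    cum, dev = 0.0, []
    for v in x:
        cum += v - mu
        dev.append(cum)
    return (max(dev) - min(dev)) / stdev(x)

def hurst_exponent(series, sizes=(16, 32, 64, 128, 256)):
    """Least-squares slope of log(mean R/S) versus log(window size)."""
    pts = []
    for w in sizes:
        chunks = [series[i:i + w] for i in range(0, len(series) - w + 1, w)]
        pts.append((log(w), log(mean(rescaled_range(c) for c in chunks))))
    mx = mean(x for x, _ in pts)
    my = mean(y for _, y in pts)
    num = sum((x - mx) * (y - my) for x, y in pts)
    den = sum((x - mx) ** 2 for x, _ in pts)
    return num / den

random.seed(3)
noise = [random.gauss(0.0, 1.0) for _ in range(1024)]
print(round(hurst_exponent(noise), 2))  # uncorrelated noise: H near 0.5
```

Values well above 0.5 indicate persistent (long-memory) scaling, the property exploited when characterizing texture in MR images; note that small-sample R/S estimates are known to be biased slightly upward.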


Subjects
Fractals, Neuroimaging, Humans, Magnetic Resonance Imaging
4.
HERD ; : 19375867231225395, 2024 Jan 24.
Article in English | MEDLINE | ID: mdl-38264993

ABSTRACT

OBJECTIVE: The aim of this study is to analyze the consistency, variability, and potential standardization of the terminology used to describe architectural variables (AVs) and health outcomes in evidence-based design (EBD) studies. BACKGROUND: In EBD research, consistent terminology is crucial for studying the effects of AVs on health outcomes. However, researchers may have used diverse terms, which could lead to confusion and inconsistency. METHODS: Three recent large systematic reviews were used as a source of publications, and 105 publications were extracted. The analysis aimed to produce a list of the terms used to refer to the unique concepts of AVs and health outcomes, with a specific focus on people with dementia. Each term's frequency was calculated, and statistical tests, including the χ2 test and post hoc tests, were employed to compare their distributions. RESULTS: The study identified representative terms for AVs and health outcomes, revealing the variability of terminology usage within the EBD field for dementia-friendly design. The comparative analysis of the identified terms highlighted patterns of frequency and distribution, shedding light on potential areas for standardization. CONCLUSIONS: The findings emphasize the need for standardized terminologies in EBD to improve communication, collaboration, and knowledge synthesis. Standardized terminology can facilitate research comparability, enhance the generalizability of findings by creating a common language across studies and practitioners, and support the development of EBD guidelines. The study contributes to the ongoing discourse on standardizing terminologies in the field and provides insights into strategies for achieving consensus among researchers, practitioners, and stakeholders in health environmental research.

5.
Article in English | MEDLINE | ID: mdl-38200715

ABSTRACT

Out of the 166 articles published in Journal of Industrial Microbiology and Biotechnology (JIMB) in 2019-2020 (not including special issues or review articles), 51 of them used a statistical test to compare two or more means. The most popular test was the (Standard) t-test, which often was used to compare several pairs of means. Other statistical procedures used included Fisher's least significant difference (LSD), Tukey's honest significant difference (HSD), and Welch's t-test; and to a lesser extent Bonferroni, Duncan's Multiple Range, Student-Newman-Keuls, and Kruskal-Wallis tests. This manuscript examines the performance of some of these tests with simulated experimental data, typical of those reported by JIMB authors. The results show that many of the most common procedures used by JIMB authors result in statistical conclusions that are prone to have large false positive (Type I) errors. These error-prone procedures included the multiple t-test, multiple Welch's t-test, and Fisher's LSD. These multiple comparisons procedures were compared with alternatives (Fisher-Hayter, Tukey's HSD, Bonferroni, and Dunnett's t-test) that were able to better control Type I errors. NON-TECHNICAL SUMMARY: The aim of this work was to review and recommend statistical procedures for Journal of Industrial Microbiology and Biotechnology authors who often compare the effect of several treatments on microorganisms and their functions.
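The family-wise Type I error inflation described above is easy to reproduce. A minimal simulation, using a large-sample z approximation in place of the exact t-test purely to stay dependency-free (the data are synthetic, not from the JIMB survey):

```python
import random
from math import erf, sqrt
from statistics import mean, stdev

def z_test_p(x, y):
    """Two-sided p-value from a large-sample z approximation to the two-sample t-test."""
    se = sqrt(stdev(x) ** 2 / len(x) + stdev(y) ** 2 / len(y))
    z = abs(mean(x) - mean(y)) / se
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

random.seed(1)
k, n, trials, alpha = 6, 30, 500, 0.05
m = k * (k - 1) // 2                  # 15 pairwise comparisons among 6 groups
raw = bonf = 0
for _ in range(trials):
    # All k groups drawn from the SAME distribution: any "significant" pair is a false positive.
    groups = [[random.gauss(0, 1) for _ in range(n)] for _ in range(k)]
    pvals = [z_test_p(groups[i], groups[j])
             for i in range(k) for j in range(i + 1, k)]
    raw += min(pvals) < alpha         # any uncorrected false positive in this experiment
    bonf += min(pvals) < alpha / m    # Bonferroni-corrected threshold
print(f"family-wise Type I rate, uncorrected: {raw / trials:.2f}")
print(f"family-wise Type I rate, Bonferroni:  {bonf / trials:.2f}")
```

With 15 uncorrected pairwise tests, roughly half of the simulated experiments produce at least one spurious "significant" difference, while the Bonferroni correction keeps the family-wise rate near the nominal 5%.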


Subjects
Industrial Microbiology, Periodicals as Topic
6.
PeerJ Comput Sci ; 9: e1557, 2023.
Article in English | MEDLINE | ID: mdl-38077609

ABSTRACT

The whale optimization algorithm (WOA) is a widely used metaheuristic optimization approach with applications in various scientific and industrial domains. However, WOA relies solely on the best solution to guide the population in subsequent iterations, overlooking the valuable information embedded in other candidate solutions. To address this limitation, we propose a novel and improved variant called Pbest-guided differential WOA (PDWOA), which combines the strengths of WOA, the particle swarm optimizer (PSO), and the differential evolution (DE) algorithm to overcome these shortcomings. In this study, we conduct a comprehensive evaluation of the proposed PDWOA algorithm on both benchmark and real-world optimization problems. The benchmark tests comprise 30-dimensional functions from the CEC 2014 Test Functions, while the real-world problems include pressure vessel optimal design, tension/compression spring optimal design, and welded beam optimal design. We present the simulation results, including the outcomes of non-parametric statistical tests, namely the Wilcoxon signed-rank test and the Friedman test, which validate the performance improvements achieved by PDWOA over other algorithms. The results demonstrate the superiority of PDWOA over recent methods, including the original WOA. These findings provide valuable insight into the effectiveness of the proposed hybrid WOA algorithm, and we offer recommendations for future research to further enhance its performance and open new avenues of exploration in the field of optimization algorithms. The MATLAB code of PDWOA is publicly available at https://github.com/ebrahimakbary/PDWOA.
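The Wilcoxon signed-rank test used for such pairwise algorithm comparisons can be sketched as follows; this computes only the W statistic, drops zero differences, and applies no tie correction (significance would then be read from a table or a normal approximation), and the paired data below are illustrative, not from the paper:

```python
def wilcoxon_w(a, b):
    """Wilcoxon signed-rank W = min(W+, W-); zeros dropped, no tie correction."""
    diffs = [x - y for x, y in zip(a, b) if x != y]
    by_abs = sorted(diffs, key=abs)                     # rank by absolute difference
    w_plus = sum(r for r, d in enumerate(by_abs, 1) if d > 0)
    w_minus = sum(r for r, d in enumerate(by_abs, 1) if d < 0)
    return min(w_plus, w_minus)

# Hypothetical paired scores (e.g. two algorithms on the same benchmark runs).
before = [125, 115, 130, 140, 140, 115, 140, 125, 140, 135]
after = [110, 122, 125, 120, 140, 124, 123, 137, 135, 145]
print(wilcoxon_w(before, after))  # → 18
```

A small W relative to its critical value indicates a systematic difference between the paired samples, which is exactly how the abstract's pairwise comparisons are judged.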

7.
J Surg Educ ; 80(7): 1046-1052, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37142490

ABSTRACT

BACKGROUND: It is important for physicians to be familiar with the statistical techniques commonly used in published medical research. Statistical errors in the medical literature are common, and there is a reported lack of the statistical knowledge necessary for data interpretation and journal reading. As study design has become increasingly complex, the peer-reviewed literature poorly addresses and explains the most common statistical methods utilized across leading orthopedic journals. METHODS: Articles from 5 leading general and subspecialty orthopedic journals were compiled from 3 distinct time periods. After exclusions were applied, 9521 articles remained, and a random 5% sample of these articles, balanced across journals and years, was drawn, yielding 437 articles after additional exclusions. Information was collected on the number of statistical tests used, power/sample size calculations, the types of statistical tests used, level of evidence (LOE), study type, and study design. RESULTS: The mean number of statistical tests across all 5 orthopedic journals increased from 1.39 to 2.29 by 2018 (p = 0.007). The percentage of articles reporting power/sample size analyses was not found to differ significantly by year, although it rose from 2.6% in 1994 to 21.6% in 2018 (p = 0.081). The most commonly used statistical test was the t-test, present in 20.5% of articles, followed by the chi-square test (13%), Mann-Whitney analysis (12.6%), and analysis of variance (ANOVA, 9.6%). The mean number of tests was generally greater in articles from higher-impact-factor journals (p = 0.013). Studies with an LOE of I used the highest mean number of statistical tests (3.23) compared with studies of lower LOE ratings (range 1.66-2.69, p < 0.001). Randomized controlled trials used the highest mean number of statistical tests (3.31), while case series used the lowest (1.57, p < 0.001).
CONCLUSIONS: The mean number of statistical tests used per article has increased over the past 25 years, with the t-test, chi-square test, Mann-Whitney analysis, and ANOVA being the most used statistical tests in leading orthopedic journals. Despite this increase, advanced statistical testing remains scarce in the orthopedic literature. This study displays important trends in data analysis and can serve as a guide to help clinicians and trainees better understand the statistics used in the literature, as well as to identify deficits that should be addressed to help advance the field of orthopedics.


Subjects
Biomedical Research, Orthopedic Procedures, Orthopedics, Periodicals as Topic, Journal Impact Factor
8.
J Struct Biol ; 214(4): 107907, 2022 12.
Article in English | MEDLINE | ID: mdl-36272694

ABSTRACT

Backbone dihedral angles ϕ and ψ are the main structural descriptors of proteins and peptides. The distribution of these angles has been investigated over decades as they are essential for the validation and refinement of experimental measurements, as well as for structure prediction and design methods. The dependence of these distributions, not only on the nature of each amino acid but also on that of the closest neighbors, has been the subject of numerous studies. Although neighbor-dependent distributions are nowadays generally accepted as a good model, there is still some controversy about the combined effects of left and right neighbors. We have investigated this question using rigorous methods based on recently-developed statistical techniques. Our results unambiguously demonstrate that the influence of left and right neighbors cannot be considered independently. Consequently, three-residue fragments should be considered as the minimal building blocks to investigate polypeptide sequence-structure relationships.


Subjects
Peptides
9.
Sensors (Basel) ; 22(18)2022 Sep 15.
Article in English | MEDLINE | ID: mdl-36146336

ABSTRACT

The goal of this research is to accurately extract the parameters of photovoltaic cells and panels and to reduce the extraction time. To this end, the barnacles mating optimizer algorithm is proposed for the first time for parameter extraction. To demonstrate the algorithm's accuracy and speed, it is applied to the following photovoltaic cells: monocrystalline silicon, amorphous silicon, and RTC France, and to the PWP201, Sharp ND-R250A5, and Kyocera KC200GT photovoltaic panels. The mathematical models used are the single- and double-diode models. Datasets for these photovoltaic cells and panels were used, and the parameter results were compared with those obtained using other published methods and algorithms. Six statistical measures were used to analyze the performance of the barnacles mating optimizer algorithm: the root mean square error, mean absolute percentage error, mean square error, mean absolute error, mean bias error, and mean relative error. The results of these statistical tests show that the barnacles mating optimizer algorithm outperforms several algorithms. The computational-time tests were performed using two computer configurations. Using the barnacles mating optimizer algorithm, the computational time decreases by a factor of more than 30 in comparison with one of the best algorithms, the hybrid successive discretization algorithm.
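The six error measures listed above have standard definitions; a minimal sketch (the measured/modeled values below are invented for illustration, not fit results from the paper):

```python
from math import sqrt

def error_metrics(measured, modeled):
    """Standard definitions of the six error measures named in the abstract."""
    errs = [m - p for m, p in zip(measured, modeled)]
    n = len(errs)
    mse = sum(e * e for e in errs) / n
    return {
        "MSE": mse,
        "RMSE": sqrt(mse),
        "MAE": sum(abs(e) for e in errs) / n,
        "MBE": sum(errs) / n,  # signed, so over- and under-prediction can cancel
        "MAPE": 100 * sum(abs(e / m) for e, m in zip(errs, measured)) / n,
        "MRE": sum(e / m for e, m in zip(errs, measured)) / n,
    }

measured = [1.0, 2.0, 4.0]
modeled = [1.1, 1.9, 4.2]
for name, value in error_metrics(measured, modeled).items():
    print(f"{name}: {value:.4f}")
```

Reporting RMSE together with the signed MBE is useful because a near-zero MBE with a large RMSE reveals scatter without systematic bias.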


Subjects
Thoracica, Algorithms, Animals, France, Theoretical Models, Silicon
10.
Cytopathology ; 33(6): 663-667, 2022 11.
Article in English | MEDLINE | ID: mdl-36017662

ABSTRACT

This article serves as the second in a series that offers recommendations for optimal data reporting, specifically focusing on statistical methods most frequently reported by the Cytopathology audience. The inaugural article, Recommendations for reporting statistical results when comparing proportions, dealt with the most common category of reported statistical tests over 2.5 years of Cytopathology articles-comparing proportions. Comparing samples using t tests, Mann-Whitney U, analysis of variance, and Kruskal-Wallis tests was another common category of statistical test reported among this audience. An important distinction between these tests is based on whether the samples follow a normal distribution. Therefore, Parametric or nonparametric statistical tests: Choosing the most appropriate option for your data is the second topic in the series. While this article will review considerations when selecting parametric or nonparametric statistical tests, an extensive review of each method is beyond the scope of this summary. The author encourages the reader to consult with a trained statistician to map out a thorough analytical plan (including their recommendations for the appropriate statistical test[s] to use) prior to data collection.
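The normality-based choice described above can be sketched as follows; the skewness/kurtosis screen is a crude stand-in for a formal test such as Shapiro-Wilk, used here only to keep the example dependency-free, and the samples are synthetic:

```python
import random
from statistics import mean, stdev

def looks_normal(x, skew_tol=0.8, kurt_tol=1.5):
    """Crude normality screen via sample skewness and excess kurtosis
    (a stand-in for a formal test such as Shapiro-Wilk)."""
    mu, sd, n = mean(x), stdev(x), len(x)
    skew = sum(((v - mu) / sd) ** 3 for v in x) / n
    kurt = sum(((v - mu) / sd) ** 4 for v in x) / n - 3
    return abs(skew) < skew_tol and abs(kurt) < kurt_tol

def choose_two_sample_test(x, y):
    """Parametric test if both samples pass the screen, otherwise nonparametric."""
    return "t test" if looks_normal(x) and looks_normal(y) else "Mann-Whitney U"

random.seed(0)
normal_a = [random.gauss(10, 2) for _ in range(200)]
normal_b = [random.gauss(11, 2) for _ in range(200)]
skewed = [random.expovariate(1.0) for _ in range(200)]
print(choose_two_sample_test(normal_a, normal_b))  # both pass: t test
print(choose_two_sample_test(normal_a, skewed))    # skewed sample fails: Mann-Whitney U
```

As the article stresses, a decision rule like this is a first pass only; a statistician should confirm the analytical plan before data collection.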


Subjects
Nonparametric Statistics, Humans, Normal Distribution
11.
Front Neural Circuits ; 16: 911245, 2022.
Article in English | MEDLINE | ID: mdl-35669452

ABSTRACT

The brain criticality hypothesis has been studied for about 20 years, and various models and methods have been developed to probe this field, along with a large body of controversial experimental findings. However, no standardized protocol of analysis has been established so far. Therefore, hoping to contribute to the standardization of such analyses, in this paper we review several available tools used for estimating the criticality of the brain.


Subjects
Brain, Neurological Models
12.
Rev. habanera cienc. méd ; 21(1), Feb. 2022.
Article in Spanish | LILACS, CUMED | ID: biblio-1409455

ABSTRACT



Introduction: The method of expert consultation is commonly applied in scientific research in the Social Sciences to validate scales, approaches, planning, policies, and other matters for decision making; however, its partial, indiscriminate, and incorrect use is often observed. Objective: To substantiate the need to apply the method of expert consultation at three levels in order to determine: the competence of the experts; the reliability of the instrument used; and the consensus among experts on the topic addressed. Material and Methods: A systematization of documents related to the use of the method of expert consultation in scientific research was carried out, including master's and doctoral theses, articles, monographs, and other publications. Non-parametric statistical tests were applied. Results: It is confirmed that applying the method of expert consultation following the appropriate procedures raises methodological rigor and its contribution to scientific research. Conclusions: The method of expert consultation is an effective tool in scientific research provided that it is applied with the required rigor.


Subjects
Humans
13.
J Pharm Biomed Anal ; 210: 114581, 2022 Feb 20.
Article in English | MEDLINE | ID: mdl-35026592

ABSTRACT

Particle size distribution (PSD), the spatial location and cluster size of ingredient particles, polymorphism, and the compositional distribution of a pharmaceutical product are among the most important attributes in establishing the drug release-controlling microstructural and solid-state properties that would be used to (re)design or reproduce similar products. Numerous solid-state techniques are available for PSD analysis. Laser diffraction (LD) is mostly used to study the PSD of raw materials; however, a constraint of LD is the interference between active pharmaceutical ingredients (API) and excipients, which makes it very challenging to measure API size in a tablet. X-ray powder diffraction (XRPD) is widely employed to establish the polymorphism of API and excipients. This study examined a commercial osmotic tablet, extracting the solid-state properties of the API and a functional excipient by Raman imaging. Establishing repeatability, reproducibility, and sample representativeness for non-uniform, inhomogeneous samples necessitates multiple measurements, which can be time-consuming and tedious with imaging-based techniques. Advanced statistical methodologies are used to overcome these disadvantages and expedite the characterization process. Overall, this study demonstrates that Raman imaging can be employed as a non-invasive and effective offline method for assessing the solid-state characteristics of API and functional excipients in complex dosage forms such as osmotic tablets.


Subjects
Excipients, Raman Spectrum Analysis, Particle Size, Reproducibility of Results, Tablets
14.
Wirel Pers Commun ; 122(4): 3167-3204, 2022.
Article in English | MEDLINE | ID: mdl-34518743

ABSTRACT

Constraints imposed by the emergence of the novel coronavirus have abruptly changed the operative mode of medical services. Most hospitals migrated to telemedicine for non-invasive and non-emergency patients during the COVID-19 period, and telemedicine services now remotely deliver health care to different types of patients in isolation. Here, patients' medical data must be transmitted to different physicians in a safe manner, secured so as to preserve privacy. Cardiovascular diseases (CVDs) are cardiac diseases related to blockage of arteries and veins, and cardiac patients are more susceptible to COVID-19; they are therefore advised to be treated through cardiac telemedicine services. This paper presents intelligent and secured transmission of patients' clinical cardiac reports using a session key based on a recurrence relation. The reports were processed through confusion matrix operations, and the resulting confusion matrices are transferred to a specified number of cardiologists with additional secret-share encapsulation. Robustness, transparency, and the cryptographic engineering were tested under different sets of inputs. The total cryptographic times observed were 469.92 ms, 374.45 ms, 502.88 ms, 361.38 ms, 493.12 ms, and 660.16 ms, which is acceptable compared with other classical techniques. The correlation coefficient estimated for the proposed variables was -0.362. Different types of results and their analysis prove the efficiency of the proposed technique, which will provide stronger security for medical data transmission, especially in the needy hours of the COVID-19 pandemic.

15.
Pharm Stat ; 21(2): 345-360, 2022 03.
Article in English | MEDLINE | ID: mdl-34608741

ABSTRACT

Combination therapies are increasingly adopted as the standard of care for various diseases to improve treatment response, minimise the development of resistance and/or minimise adverse events. Therefore, synergistic combinations are screened early in the drug discovery process, in which their potential is evaluated by comparing the observed combination effect to that expected under a null model. Such methodology is implemented in the BIGL R-package which allows for a quick screening of drug combinations. We extend the meanR and maxR tests from this package by allowing non-constant variance of the responses and by extending the list of null models (Loewe, Loewe2, HSA, Bliss). These new tests are evaluated in a comprehensive simulation study under various models for additivity and synergy, various monotherapeutic dose-response models (complete, partial and incomplete responders) and various types of deviation from the constant variance assumption. In addition, the BIGL package is extended with bootstrap confidence intervals for the individual off-axis points and for the overall synergy strength, which were demonstrated to have reliable coverage and can complement the existing tests. We conclude that the differences in performance between the different null models are small and depend on the simulation scenario. As a result, the choice of null model should be driven by expert knowledge on the particular problem. Finally, we demonstrate the new features of the BIGL package and the difference between the synergy models on a real dataset from drug discovery. The BIGL package is available at CRAN (https://CRAN.R-project.org/package=BIGL) and as a Shiny app (https://synergy.openanalytics.eu/app).
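The bootstrap confidence intervals added to the BIGL package are not reproduced here; as a generic illustration of the percentile bootstrap idea (not BIGL's implementation, and with synthetic data), resample the data with replacement and read the interval off the empirical quantiles:

```python
import random
from statistics import mean

def bootstrap_ci(data, stat=mean, n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for any sample statistic."""
    rng = random.Random(seed)
    # Each replicate: resample len(data) points with replacement, recompute the statistic.
    reps = sorted(stat([rng.choice(data) for _ in data]) for _ in range(n_boot))
    lo = reps[int(n_boot * alpha / 2)]
    hi = reps[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

random.seed(7)
sample = [random.gauss(5.0, 1.0) for _ in range(50)]
lo, hi = bootstrap_ci(sample)
print(f"95% bootstrap CI for the mean: ({lo:.2f}, {hi:.2f})")
```

The same scheme works for any statistic, which is why it can complement formal tests such as meanR/maxR: pass, say, a synergy-strength estimator as `stat` instead of the mean.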


Subjects
Drug Discovery, Computer Simulation, Drug Combinations, Drug Discovery/methods, Drug Synergism, Humans
16.
SN Comput Sci ; 2(6): 426, 2021.
Article in English | MEDLINE | ID: mdl-34458859

ABSTRACT

Sweeping changes in the use of ICT have propelled the introduction of telecare health services in the crucial times of the coronavirus pandemic. The proposed technique is based on secured glycemic information sharing between server and users using an artificial neural computational learning suite. Using symmetric Tree Parity Machines (TPMs) at the server and user ends, a salp swarm-based session key is generated for the proposed modular encryption of glycemic information. A notable feature is that both TPMs become fully synchronized in terms of their weight vectors without exchanging the entire session key. With the rise in the intake of high Glycemic Index (GI) foods in today's COVID-19 lockdown lifestyle, such diets contribute to the formation of cavities inside the periodontium and to several other diseases, such as COPD and Type I and Type II diabetes mellitus. The GI-based food pyramid ranks foods from top to bottom; high-GI items contribute to more comorbid disease in patients, so foods from the lower levels of the pyramid are recommended. The proposed encryption with the salp swarm-generated key is more resistant to man-in-the-middle attacks. Different mathematical tests were carried out on the proposed technique, and their outcomes prove its efficacy and support its acceptance. The total cryptographic times observed on four GI modules were 0.956 ms, 0.468 ms, 0.643 ms, and 0.771 ms.

17.
Proc Natl Acad Sci U S A ; 118(27)2021 07 06.
Article in English | MEDLINE | ID: mdl-34215694

ABSTRACT

Electron-nuclear double resonance (ENDOR) measures the hyperfine interaction of magnetic nuclei with paramagnetic centers and is hence a powerful tool for spectroscopic investigations extending from biophysics to material science. Progress in microwave technology and the recent availability of commercial electron paramagnetic resonance (EPR) spectrometers up to an electron Larmor frequency of 263 GHz now open the opportunity for a more quantitative spectral analysis. Using representative spectra of a prototype amino acid radical in a biologically relevant enzyme, the [Formula: see text] in Escherichia coli ribonucleotide reductase, we developed a statistical model for ENDOR data and conducted statistical inference on the spectra including uncertainty estimation and hypothesis testing. Our approach in conjunction with 1H/2H isotopic labeling of [Formula: see text] in the protein unambiguously established new unexpected spectral contributions. Density functional theory (DFT) calculations and ENDOR spectral simulations indicated that these features result from the beta-methylene hyperfine coupling and are caused by a distribution of molecular conformations, likely important for the biological function of this essential radical. The results demonstrate that model-based statistical analysis in combination with state-of-the-art spectroscopy accesses information hitherto beyond standard approaches.


Subjects
Statistics as Topic, Amino Acids/chemistry, Computer Simulation, Electron Spin Resonance Spectroscopy, Escherichia coli/enzymology, Protein Subunits/chemistry, Ribonucleotide Reductases/chemistry
18.
Surg Infect (Larchmt) ; 22(6): 597-603, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34270362

ABSTRACT

Background: Comparison of parameters between two or more groups forms the basis of hypothesis testing. Statistical tests (and statistical significance) are designed to report the likelihood that the observed results are caused by chance alone, given that the null hypothesis is true. Methods: To demonstrate the concepts described, we utilized the Nationwide Inpatient Sample for patients admitted for emergency general surgery (EGS) and those admitted with non-EGS diagnoses. Depending on the type and distribution of individual variables, appropriate statistical tests were applied. Results: Comparison of two numerical variables begins with a simple correlation, depicted graphically in a scatterplot and assessed statistically with either a Pearson or Spearman correlation coefficient. Normality of numerical variables is then assessed; in the case of normality, a t-test is applied when comparing two groups and an analysis of variance (ANOVA) when comparing three or more groups. For data that are not distributed normally, a Wilcoxon rank sum (Mann-Whitney U) test may be used. For categorical variables, the χ2 test is used, unless cell counts are less than five, in which case the Fisher exact test is used. Importantly, both the ANOVA and χ2 test assess only overall differences among two or more groups; individual pairwise comparison tests, with adjustment for multiple comparisons, must be used to identify differences between two specific groups when there are more than two groups. Conclusion: A basic understanding of statistical significance and of the type and distribution of variables is necessary to select the appropriate statistical test. Failure to understand these concepts may result in spurious conclusions.
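The decision rule for categorical variables (χ2 unless an expected cell count is below five) can be sketched for a 2x2 table as follows; the Fisher p-value uses the standard hypergeometric definition, and the tables below are made-up examples:

```python
from math import comb

def fisher_exact_p(table):
    """Two-sided Fisher exact p-value for a 2x2 table [[a, b], [c, d]]:
    sum of hypergeometric probabilities no larger than the observed table's."""
    (a, b), (c, d) = table
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d
    def prob(x):
        return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)
    p_obs = prob(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)

def pick_2x2_test(table):
    """The rule from the abstract: Fisher exact if any expected count is below five."""
    (a, b), (c, d) = table
    n = a + b + c + d
    expected = [r * col / n for r in (a + b, c + d) for col in (a + c, b + d)]
    return "Fisher exact" if min(expected) < 5 else "chi-square"

print(pick_2x2_test([[2, 3], [4, 9]]))      # → Fisher exact (smallest expected count < 5)
print(pick_2x2_test([[20, 30], [40, 90]]))  # → chi-square
```

Basing the rule on expected rather than observed counts matters: a table with small observed cells can still have all expected counts above five.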


Subjects
Analysis of Variance, Nonparametric Statistics, Humans
19.
Int J Radiat Biol ; 97(7): 888-905, 2021.
Article in English | MEDLINE | ID: mdl-33970757

ABSTRACT

PURPOSE: In case of a mass-casualty radiological event, networking would be needed to overcome surge limitations and to quickly obtain homogeneous results (reported aberration frequencies or estimated doses) among biodosimetry laboratories. These results must be consistent within such a network, and inter-laboratory comparisons (ILCs) are widely accepted as the means to achieve this homogeneity. At the European level, a great effort has been made to harmonize biological dosimetry laboratories, notably during the MULTIBIODOSE and RENEB projects. To continue these harmonization efforts, the RENEB consortium launched this intercomparison, which is larger than the RENEB network itself, involving 38 laboratories from 21 countries. In this ILC, all steps of the process were monitored, from blood shipment to dose estimation. The exercise also aimed to evaluate the statistical tools used to compare laboratory performance. MATERIALS AND METHODS: Blood samples were irradiated at three different doses, 1.8, 0.4 and 0 Gy (samples A, C and B), with 4-MV X-rays at 0.5 Gy min-1 and sent to the participant laboratories. Each laboratory was requested to blindly analyze 500 cells per sample and to report the observed frequency of dicentric chromosomes per metaphase and the corresponding estimated dose. RESULTS: This ILC demonstrates that blood samples can be successfully distributed among laboratories worldwide to perform biological dosimetry in case of a mass-casualty event. Although substantial harmonization has been achieved among the RENEB laboratories in multiple areas, issues were identified with the available statistical tools, which cannot fully exploit the richness of the results of a large ILC. Even though Z- and U-tests are accepted methods for biodosimetry ILCs, fixing the number of analyzed metaphases at 500 and setting a common test threshold for all studied doses is inappropriate for evaluating laboratory performance.
Another problem highlighted by this ILC is the diversity of dose-effect curves. Despite the initial advantage of incorporating each laboratory's scoring specificities, the lack of defined criteria for assessing the robustness of each laboratory's curve is a disadvantage of the 'one curve per laboratory' model. CONCLUSIONS: Based on our study, it seems relevant to develop tools better adapted to the collection and processing of the results produced by participant laboratories. We are confident that, after the initial harmonization phase reached by the RENEB laboratories, a new step toward better optimization of laboratory networks in biological dosimetry and the associated ILCs is on the way.


Subjects
Laboratories, Radiometry, Chromosome Aberrations/radiation effects, Humans, Radiation Exposure, Reproducibility of Results
20.
J Environ Manage ; 280: 111707, 2021 Feb 15.
Article in English | MEDLINE | ID: mdl-33349512

ABSTRACT

The objectives of this study are: (i) to evaluate the spatio-temporal variability of fire foci using environmental satellites, CHIRPS and remote sensing products with applied statistics, and (ii) to identify the relational pattern between the distribution of fire foci and environmental, meteorological, and socioeconomic variables in the mesoregions of Minas Gerais (MG), Brazil. This study used a time series of fire foci from 1998 to 2015 via BDQueimadas. The temporal record of fire foci was evaluated by the Mann-Kendall (MK), Pettitt (P), Shapiro-Wilk (SW), and Bartlett (B) tests. The spatial distribution of burned area (MCD64A1-MODIS) and the kernel density (20 km radius) were estimated. The environmental variables analyzed were rainfall (mm) and maximum temperature (°C), as well as the vegetation canopy proxies NDVI, SAVI, and EVI. PCA was applied to explain the interaction between fire foci and demographic, environmental, and geographical variables for MG. The MK test indicated a significant increasing trend in fire foci in MG. The SW and B tests were significant for non-normality and homogeneity of the data. The P test pointed to abrupt changes in the 2001 and 2002 cycles (moderate El Niño and La Niña), which contribute to the annual increase and to the winter and spring increases identified by the kernel density maps. Burned areas highlighted the northern and northwestern regions of MG, Triângulo Mineiro, Jequitinhonha, and South/Southwest MG, in the 3rd quarter (a 17% increase) and the 4th quarter (an 88% increase). The PCA resulted in three PCs that explained 71.49% of the total variation. SAVI was the variable that stood out, with 11.12% of the total variation, followed by Belo Horizonte, the most representative in MG. We emphasize that the conceptual theoretical model applied here can support the environmental management of fire risk; however, public policies should follow technical-scientific guidelines to mitigate the resulting socioeconomic and environmental damages.
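The Mann-Kendall trend test applied to the fire-foci series has a compact form; a minimal version without the tie correction (not the authors' exact implementation, and run here on a toy series):

```python
from math import erf, sqrt

def mann_kendall(series):
    """Mann-Kendall trend test: S statistic and two-sided normal-approximation
    p-value, without the tie correction needed for series with repeated values."""
    n = len(series)
    # S counts concordant minus discordant pairs over all i < j.
    s = sum((series[j] > series[i]) - (series[j] < series[i])
            for i in range(n) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18
    z = 0.0 if s == 0 else (s - (1 if s > 0 else -1)) / sqrt(var_s)  # continuity correction
    return s, 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

s, p = mann_kendall(list(range(20)))   # strictly increasing toy series
print(s)  # → 190: every later value exceeds every earlier one
```

For this strictly increasing series the p-value is effectively zero, i.e. a significant increasing trend, which is the conclusion the MK test delivers for the MG fire-foci record; a constant series gives S = 0 and p = 1.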


Subjects
Fires, Brazil, El Niño-Southern Oscillation, Seasons